Nonenman's group workspace

30_roberta-large_A100_24_384_2e-5_squad_v2

Notes: Fine-tuning roberta-large on BiPaR

Author: nonenman
State: Finished
Start time: January 10th, 2024 11:35:02 AM
Runtime: 10m 42s
Tracked hours: 10m 39s
Run path: nonenman/RC_project/sj44n0lh
OS: Linux-4.18.0-372.46.1.el8_6.x86_64-x86_64-with-glibc2.28
Python version: 3.10.0
Command: /pfs/data5/home/as/as_as/as_nonenman/python_scripts/train.py --model_name roberta-large --model_checkpoint roberta-large-finetuned-squad_v2/epoch-1 --tokenizer_checkpoint roberta-large --batch_size 24 --num_train_epochs 4 --learning_rate 2e-5 --dataset bipar --i_dont_know
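
The training script itself is not part of this page. As a rough illustration only, the flags above could map onto a Hugging Face Trainer setup along these lines; everything beyond the flag names and values is an assumption, including the reading of --i_dont_know as "keep unanswerable questions":

    import argparse

    from transformers import (
        AutoModelForQuestionAnswering,
        AutoTokenizer,
        TrainingArguments,
    )

    parser = argparse.ArgumentParser()
    parser.add_argument("--model_name", default="roberta-large")
    parser.add_argument("--model_checkpoint", required=True)
    parser.add_argument("--tokenizer_checkpoint", default="roberta-large")
    parser.add_argument("--batch_size", type=int, default=24)
    parser.add_argument("--num_train_epochs", type=int, default=4)
    parser.add_argument("--learning_rate", type=float, default=2e-5)
    parser.add_argument("--dataset", default="squad_v2")
    parser.add_argument("--i_dont_know", action="store_true")  # assumed meaning: keep unanswerable questions
    args = parser.parse_args()

    # Weights come from the intermediate SQuAD-v2 checkpoint; the tokenizer
    # is loaded from the base model.
    tokenizer = AutoTokenizer.from_pretrained(args.tokenizer_checkpoint)
    model = AutoModelForQuestionAnswering.from_pretrained(args.model_checkpoint)

    training_args = TrainingArguments(
        output_dir=f"{args.model_name}-finetuned-{args.dataset}",
        per_device_train_batch_size=args.batch_size,
        num_train_epochs=args.num_train_epochs,
        learning_rate=args.learning_rate,
        report_to="wandb",  # this is what produces run pages like this one
    )
    # Dataset loading and a Trainer(model=model, args=training_args, ...) call
    # would follow; the BiPaR preprocessing is specific to the author's script
    # and omitted here.
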
System Hardware
CPU count: 64
Logical CPU count: 128
GPU count: 1
GPU type: NVIDIA A100 80GB PCIe
W&B CLI Version: 0.14.0
Config

Config parameters are your model's inputs. Six values were logged for this run; matched against the command-line flags and the run name where possible, they are:

  • batch_size: 24
  • learning_rate: 0.00002 (2e-5)
  • num_train_epochs: 4
  • 384 (likely the maximum sequence length, per the run name)
  • 0.1
  • 0.01
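
Config entries like these are attached when the run is created. A minimal sketch of how this run's config could have been logged, assuming the key names given above (only the first three are confirmed by the command line; max_seq_length is a guess):

    import wandb

    run = wandb.init(
        entity="nonenman",
        project="RC_project",
        config={
            "batch_size": 24,
            "learning_rate": 2e-5,
            "num_train_epochs": 4,
            "max_seq_length": 384,  # assumed key name
            # two further values (0.1 and 0.01) were logged under keys
            # that this page does not show
        },
    )
    print(run.config["learning_rate"])  # 2e-05
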
Summary

Summary metrics are your model's outputs. Fourteen values were logged for this run; their key names are not shown on this page:

  • 1.8336588975154993
  • 58.1473968897904
  • 73.22028144460998
  • 21
  • 1,024
  • 514
  • 16
  • 24
  • 50,265
  • 0.7161948278488353
  • 0
  • 0.9399263858795166
  • 87.3157596820747
  • 10.547160520466663
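
By default a summary entry holds the last value logged for its metric, and entries can also be written directly. A minimal sketch with hypothetical key names, since the real keys are not shown above:

    import wandb

    run = wandb.init(entity="nonenman", project="RC_project")

    # Metrics logged during training end up in the summary with their last value.
    run.log({"train/loss": 1.8337})

    # Final evaluation scores can be written into the summary explicitly.
    run.summary["eval/f1"] = 73.22           # hypothetical key name
    run.summary["eval/exact_match"] = 58.15  # hypothetical key name

    run.finish()
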
Artifact Outputs

This run produced 1 artifact as an output.

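The artifact listing itself did not render in this export. For reference, a single model artifact like this is typically logged along these lines; the name, type, and path below are assumptions, not values taken from the run:

    import wandb

    run = wandb.init(entity="nonenman", project="RC_project")

    # Package the fine-tuned model directory as a versioned output artifact.
    artifact = wandb.Artifact("roberta-large-finetuned-bipar", type="model")  # hypothetical name
    artifact.add_dir("roberta-large-finetuned-bipar/")  # hypothetical local path
    run.log_artifact(artifact)

    run.finish()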